3 research outputs found

    Visual Attention Control to Support Gesture-Based Human-Machine Interaction

    No full text
    Rae R, Fislage M, Ritter H. Visuelle Aufmerksamkeitssteuerung zur Unterstützung gestikbasierter Mensch-Maschine Interaktion [Visual attention control to support gesture-based human-machine interaction]. KI - Künstliche Intelligenz, Themenheft Aktive Sehsysteme. 1999;1:18-24.

    Using Visual Attention to Recognize Human Pointing Gestures in Assembly Tasks

    No full text
    Humans often use hand gestures to instruct other persons, e.g., to grasp an object or to look at a certain location. Especially in assembly tasks, pointing gestures can simplify the cooperation between man and machine. We use visual attention to control the active cameras of the system so that they fixate interesting image regions, e.g., assembly parts. Furthermore, the system reacts to the appearance of human hands and recognizes the direction of pointing gestures made by the user. This information is used to guide the viewing direction of the artificial observer. We describe both the multi-layer attention model and the hand gesture recognition. Results show the reliability of our adaptive system, which robustly recognizes pointing gestures made by different users.
    Keywords: Human-machine interface, pointing gesture recognition, computer vision, active vision, visual attention, neural networks
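
    A minimal sketch, not the authors' implementation, of the pipeline this abstract describes: a saliency map proposes a fixation point for the active camera, and a detected pointing hand overrides it by extrapolating along the hand-to-fingertip direction. All function names, filter sizes, and the extrapolation step (compute_saliency, pointing_target, step=200) are illustrative assumptions.

    import numpy as np
    from scipy.ndimage import uniform_filter  # local averaging stands in for the paper's filter bank

    def compute_saliency(gray):
        """Crude center-surround contrast: deviation of each pixel from its neighbourhood."""
        center = uniform_filter(gray.astype(float), size=5)
        surround = uniform_filter(gray.astype(float), size=21)
        return np.abs(center - surround)

    def next_fixation(saliency):
        """Most salient location = next fixation point for the active camera."""
        return np.unravel_index(np.argmax(saliency), saliency.shape)

    def pointing_target(hand_pos, fingertip_pos, image_shape, step=200.0):
        """Extrapolate along the hand-to-fingertip direction to guess the pointed-at region."""
        direction = np.asarray(fingertip_pos, float) - np.asarray(hand_pos, float)
        norm = np.linalg.norm(direction)
        if norm == 0.0:
            return tuple(int(v) for v in fingertip_pos)
        target = np.asarray(fingertip_pos, float) + step * direction / norm
        target = np.clip(target, 0, np.asarray(image_shape, float) - 1)  # keep inside the image
        return tuple(target.astype(int))

    # Attention proposes a fixation; a detected pointing hand overrides it.
    gray = np.random.rand(240, 320)      # stand-in camera image
    fixation = next_fixation(compute_saliency(gray))
    hand_detected = True                 # placeholder for the hand/gesture recognizer
    if hand_detected:
        fixation = pointing_target(hand_pos=(120, 40), fingertip_pos=(110, 70),
                                   image_shape=gray.shape)
    print("steer camera toward", fixation)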

    Observation of Human Eye Movements to Simulate Visual Exploration of Complex Scenes

    No full text
    The human eyes are always in action; they explore the environment every second we are awake. But what attracts our visual attention? In this paper we examine the eye movements of human subjects observing a breakfast scenario using an eye-tracking system. Moreover, we developed a hierarchical model consisting of three modules. This model is applied to the same visual stimuli and generates saccades based on an empirically derived set of image features extracted from the input image. The architecture of our model is motivated by the anatomy of the human visual pathway and by the results from the eye-tracking experiment. The model uses low-level image features, extracted by filters similar to those found in the receptive fields of cortical neurons in mammals. The filter responses are combined in the adaptive multi-layered cortical map module. The activity in the saliency map of this first module roughly estimates the next fixation point. The second module refines this point using a neural network…
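
    Below is a minimal sketch, not the authors' model: three chained modules as the abstract outlines, with simple stand-ins for details the snippet does not give. The contrast/edge feature maps, the block pooling that plays the role of the adaptive cortical map, the local window search that replaces the neural-network refinement, and treating the third module as saccade generation with inhibition of return are all assumptions.

    import numpy as np
    from scipy.ndimage import uniform_filter, sobel  # simple stand-ins for cortical-like filters

    def module1_coarse_saliency(gray, block=16):
        """Combine low-level feature maps (contrast, edges) and pool them into a coarse
        saliency map whose peak gives the rough fixation estimate."""
        contrast = np.abs(uniform_filter(gray, size=5) - uniform_filter(gray, size=21))
        edges = np.hypot(sobel(gray, axis=0), sobel(gray, axis=1))
        sal = contrast / (contrast.max() + 1e-9) + edges / (edges.max() + 1e-9)
        h = sal.shape[0] // block * block
        w = sal.shape[1] // block * block
        coarse = sal[:h, :w].reshape(h // block, block, w // block, block).mean(axis=(1, 3))
        return sal, coarse

    def module2_refine(sal, coarse_peak, block=16):
        """Refine the rough estimate by searching the full-resolution map inside the
        block selected by module 1 (stands in for the neural-network refinement)."""
        r0, c0 = coarse_peak[0] * block, coarse_peak[1] * block
        window = sal[r0:r0 + block, c0:c0 + block]
        dr, dc = np.unravel_index(np.argmax(window), window.shape)
        return (r0 + dr, c0 + dc)

    def module3_saccade(gaze, fixation):
        """Saccade vector from the current gaze position to the refined fixation point."""
        return (fixation[0] - gaze[0], fixation[1] - gaze[1])

    # Generate a short scanpath on a random test image, with inhibition of return
    # so successive fixations move to new regions.
    gray = np.random.rand(240, 320)
    sal, coarse = module1_coarse_saliency(gray)
    gaze = (120, 160)
    for _ in range(3):
        coarse_peak = np.unravel_index(np.argmax(coarse), coarse.shape)
        fixation = module2_refine(sal, coarse_peak)
        print("saccade", module3_saccade(gaze, fixation), "-> fixation", fixation)
        gaze = fixation
        coarse[coarse_peak] = 0.0  # inhibition of return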